Return-Path: <guido@cwi.nl>
Received: from dxmint.cern.ch by nxoc01.cern.ch (NeXT-1.0 (From Sendmail 5.52)/NeXT-2.0)
        id AA28406; Fri, 9 Apr 93 01:19:28 MET DST
Received: from charon.cwi.nl by dxmint.cern.ch (5.65/DEC-Ultrix/4.3)
        id AA14885; Fri, 9 Apr 1993 01:38:51 +0200
Received: from voorn.cwi.nl by charon.cwi.nl with SMTP
        id AA14959 (5.65b/3.8/CWI-Amsterdam); Fri, 9 Apr 1993 01:38:49 +0200
Received: by voorn.cwi.nl with SMTP
        id AA15043 (5.65b/3.8/CWI-Amsterdam); Fri, 9 Apr 1993 01:38:47 +0200
Message-Id: <9304082338.AA15043=guido@voorn.cwi.nl>
Subject: Re: strategy for HTML spec?
To: www-talk@nxoc01.cern.ch
X-Organization: CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
X-Phone: +31 20 5924127 (work), +31 20 6225521 (home), +31 20 5924199 (fax)
Date: Fri, 09 Apr 1993 01:38:46 +0200
From: Guido van Rossum <Guido.van.Rossum@cwi.nl>
[This is the message I intended to redistribute to the list,
describing my roaming robot. Maybe I should make the results of this
available as a searchable HTML document? --Guido]
>I could easily write a robot which would roam around the Web (perhaps
>stochastically?), and verify the html, using sgmls. Then, whenever
>I come across something that's non-compliant, I could automatically
>send mail to wwwmaster@sitename. No one would have to annoy anyone else
>about whether or not they've verified their HTML; a program would annoy
>them automatically.
I have written a robot that does this, except it doesn't check for
valid SGML -- it just tries to map out the entire web. I believe I
found roughly 50 or 60 different sites (this was maybe 2 months ago --
I'm sorry, I didn't save the output). It took the robot about half a
day (a Saturday morning) to complete.
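
(For the curious: the core of the robot is nothing more than a
breadth-first traversal keeping a set of already-seen addresses.  Below
is a much simplified sketch of the idea in Python -- it is *not* the
actual code from python/demo/www, and the crude regex for href
attributes is only there to keep the example short.)

    import re
    from collections import deque
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    HREF = re.compile(r'href\s*=\s*"([^"]+)"', re.IGNORECASE)

    def roam(start, limit=1000):
        """Breadth-first walk of the web, collecting the set of sites seen."""
        seen, sites, queue = set(), set(), deque([start])
        while queue and len(seen) < limit:
            url = queue.popleft()
            if url in seen:
                continue
            seen.add(url)
            sites.add(urlparse(url).netloc)
            try:
                with urlopen(url, timeout=15) as resp:
                    if 'html' not in (resp.headers.get('Content-Type') or ''):
                        continue        # don't try to parse binaries etc.
                    page = resp.read().decode('latin-1', 'replace')
            except OSError:
                continue                # host down, unreachable, bad address, ...
            for link in HREF.findall(page):
                queue.append(urljoin(url, link).split('#')[0])
        return sites
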
There were several problems.
First, some sites were down and my robot would spend a considerable
time waiting for the connection to time out each time it found a link
to such a site. I ended up remembering the last error from a site and
skipping sites that were obviously down, but there are many different
errors you can get, depending on whether the host is down,
unreachable, doesn't run a WWW server, doesn't recognize the document
address you want, or has some other trouble (some sites were going up
and down while my robot was running, causing additional confusion).
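
(The workaround boiled down to a small table mapping each host to the
last error it produced, consulted before contacting that host again.  A
sketch of the idea only, with made-up names -- the real robot is built
on my own client library rather than on the calls used here:)

    import socket
    import urllib.error
    from urllib.parse import urlparse
    from urllib.request import urlopen

    last_error = {}     # hostname -> last exception seen for that host

    def fetch(url, timeout=15):
        """Fetch a document, but don't retry hosts that already failed."""
        host = urlparse(url).netloc
        if host in last_error:
            raise last_error[host]      # skip the wait for another timeout
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, socket.timeout) as err:
            # Remember hosts that are down or unreachable; an HTTP error
            # only means one bad document, so don't blacklist the whole site.
            if not isinstance(err, urllib.error.HTTPError):
                last_error[host] = err
            raise
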
Next, more importantly, some sites have an infinite number of
documents. There are several causes for this.
First, several sites have gateways to the entire VMS documentation (I
have never used VMS but apparently the VMS help system is a kind of
hypertext). While not exactly infinite, the number of nodes is *very*
large. Luckily such gateways are easily recognized by the kind of
pathname they use, and VMS help is unlikely to contain pointers to
anything except more VMS help, so I put in a simple trap to stop
these.
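
(The trap is nothing clever: just a test on the pathname before the
robot follows a link.  The patterns below are purely illustrative -- I
don't remember the exact paths these gateways use.)

    from urllib.parse import urlparse

    # Path fragments marking "bottomless" gateways.  Illustrative only; a
    # real list would be built from the addresses the robot got stuck in.
    TRAPS = ('/vmshelp', '/htbin/vmshelp')

    def is_trap(url):
        """True for links leading into an effectively infinite document tree."""
        path = urlparse(url).path.lower()
        return any(fragment in path for fragment in TRAPS)

    # In the robot's main loop:
    #     if is_trap(link):
    #         continue
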
Next, there are other gateways. I can't remember whether I
encountered a Gopher or WAIS gateway, but these would have even worse
problems.
Finally, some servers contain bugs that cause loops by referencing
the same document with an ever-growing path. (The relative path
resolving rules are tricky, and I was using my own www client which
isn't derived from Tim's, which made this more severe, but I have also
found occurrences reproducible with the CERN www client.)
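
(Getting the relative address resolution right removes most of these
loops, but a safety net still helps: an address whose path keeps
repeating the same segment, or grows absurdly deep, is almost certainly
the product of such a server bug.  Another rough sketch, with the
thresholds picked out of thin air:)

    from collections import Counter
    from urllib.parse import urljoin, urlparse

    def resolve(base, relative):
        """Resolve a relative address against the document it occurs in."""
        return urljoin(base, relative)

    def looks_like_loop(url, max_repeats=3, max_depth=20):
        """Guess that an address is the product of a path-growing loop."""
        segments = [s for s in urlparse(url).path.split('/') if s]
        if len(segments) > max_depth:
            return True
        return any(n > max_repeats for n in Counter(segments).values())

    # resolve('http://host/docs/a/b.html', '../a/b.html') stays in place,
    # while a buggy server emitting 'a/a/a/.../b.html' trips looks_like_loop.
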
Although I didn't specifically test for bad HTML, I did have to parse
the HTML to find the links, and found occasional errors. I believe
there are a few binaries, PostScript and WP files with links pointing
to them, and these take forever to fetch.  There were also various
occurrences of broken addresses here and there -- this was a good
occasion for me to debug my www client library.
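
(A partial cure for the binary/PostScript problem is to look at the
document type -- or, failing that, the file suffix -- before pulling the
whole thing across.  One more sketch; the suffix list is just a guess:)

    from urllib.parse import urlparse
    from urllib.request import Request, urlopen

    SKIP_SUFFIXES = ('.ps', '.tar', '.tar.Z', '.gif', '.hqx', '.zip')   # guesses

    def worth_parsing(url, timeout=15):
        """Decide whether a document is probably HTML before fetching it all."""
        if urlparse(url).path.endswith(SKIP_SUFFIXES):
            return False
        # Ask for the headers only; servers that can't cope are simply skipped.
        try:
            with urlopen(Request(url, method='HEAD'), timeout=timeout) as resp:
                return 'html' in (resp.headers.get('Content-Type') or '')
        except OSError:
            return False
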
If people are interested, I could run the robot again and report a
summary of the results.
I also ran a gopher robot, but after 1600 sites I gave up... The
Veronica project in the Gopher world does the same and makes the
results available as a database, although the last time I tried it the
Veronica server seemed too overloaded to respond to a simple query.
If you want source for the robots, they're part of the Python source
distribution: ftp to ftp.cwi.nl, directory pub/python, file
python0.9.8.tar.Z. The robot (and in fact my entire www and gopher
client library) is in the tar archive in directory python/demo/www.
The texinfo to html conversion program that I once advertised here is
also there. (I'm sorry, you'll have to build the Python interpreter
from the source before any of these programs can be used...)
Note that my www library isn't up to date with the latest HTML specs;
this is a hobby project and I needed my time for other things...
--Guido van Rossum, CWI, Amsterdam <Guido.van.Rossum@cwi.nl>